18 research outputs found

    Feedforward and feedback processes in vision

    No abstract available

    The involvement of centralized and distributed processes in sub-second time interval adaptation: an ERP investigation of apparent motion

    Accumulating evidence suggests that the timing of brief stationary sounds affects visual motion perception. Recent studies have shown that auditory time intervals can alter apparent motion perception not only through concurrent stimulation but also through brief adaptation. The adaptation after-effects for auditory time intervals were found to be similar to those for visual time intervals, suggesting the involvement of a central timing mechanism. To understand the nature of the cortical processes underlying such after-effects, we adapted observers to different time intervals using either brief sounds or visual flashes and examined the evoked activity to the subsequently presented visual apparent motion. Both auditory and visual time interval adaptation led to significant changes in the ERPs elicited by the apparent motion. However, the changes induced by each modality were in opposite directions. They also occurred mainly in different time windows and clustered over distinct scalp sites. The effects of auditory time interval adaptation were centred over parietal and parieto-central electrodes, while the visual adaptation effects were mostly over occipital and parieto-occipital regions. Moreover, the changes were much more salient when sounds were used during the adaptation phase. Taken together, our findings within the context of visual motion point to auditory dominance in the temporal domain and highlight the distinct nature of the sensory processes involved in auditory and visual time interval adaptation. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

    Grouping by feature of cross-modal flankers in temporal ventriloquism

    Signals in one sensory modality can influence perception in another, for example the biasing of visual timing by audition: temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing shifts to that of the nearby sound. Alternatively, the underlying sensory representations do not change; rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with temporal-information-based grouping in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants' reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants' performance improved with asynchronous presentation relative to synchronous presentation (temporal ventriloquism); however, unlike in the long-range apparent motion paradigm, this was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences on short time scales.

    Rapid motion adaptation reveals the temporal dynamics of spatiotemporal correlation between ON and OFF pathways

    At the early stages of visual processing, information is processed by two major thalamic pathways encoding brightness increments (ON) and decrements (OFF). Accumulating evidence suggests that these pathways interact and merge as early as primary visual cortex. Using regular and reverse-phi motion in a rapid adaptation paradigm, we investigated the temporal dynamics of within- and across-pathway mechanisms for motion processing. When the adaptation duration was short (188 ms), reverse-phi and regular motion led to similar adaptation effects, suggesting that the information from the two pathways is combined efficiently at early stages of motion processing. However, when the adaptation duration was increased to 752 ms, reverse-phi and regular motion showed distinct adaptation effects depending on the test pattern used, engaging spatiotemporal correlation between either the same or opposite contrast polarities. Overall, these findings indicate that spatiotemporal correlation within and across ON-OFF pathways for motion processing can be selectively adapted, and support models that integrate within- and across-pathway mechanisms for motion processing.
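The spatiotemporal correlation this abstract refers to is the operation a Hassenstein-Reichardt-style motion correlator performs, and it also shows why reverse-phi reverses the motion signal: flipping contrast polarity between frames flips the sign of the correlation. A minimal one-frame-delay sketch (illustrative only; not the authors' stimuli or model, and omitting the temporal filters of a full correlator):

```python
import numpy as np

def reichardt_response(frames):
    """Mean output of a 1-D array of one-frame-delay Reichardt correlators.

    Each detector multiplies the previous frame at a point with the current
    frame at its right-hand neighbour (rightward subunit) and subtracts the
    mirror-symmetric term; a positive mean signals rightward motion.
    """
    frames = np.asarray(frames, dtype=float)
    prev, curr = frames[:-1], frames[1:]
    rightward = prev[:, :-1] * curr[:, 1:]   # delayed left point x current right point
    leftward = prev[:, 1:] * curr[:, :-1]    # mirror subunit
    return float(np.mean(rightward - leftward))

base = np.zeros(16)
base[0] = 1.0  # a single bright spot on a dark background

# Regular motion: the spot steps one pixel rightward per frame.
regular = [np.roll(base, t) for t in range(8)]

# Reverse-phi: same rightward displacement, but contrast polarity
# flips on every frame, so successive products change sign.
reverse_phi = [((-1) ** t) * np.roll(base, t) for t in range(8)]

print(reichardt_response(regular))      # positive: rightward signal
print(reichardt_response(reverse_phi))  # negative: perceived reversal
```

Correlating same-polarity samples (regular motion) probes within-pathway correlation; the sign inversion under reverse-phi is the signature of correlation across opposite contrast polarities.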

    A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds

    Small target motion detection is critical for insects to search for and track mates or prey, which typically appear as small dim speckles in the visual field. A class of specific neurons, called small target motion detectors (STMDs), is characterized by exquisite sensitivity to small target motion. Understanding and analyzing the visual pathway of STMD neurons is beneficial for designing artificial visual systems for small target motion detection. Feedback loops have been widely identified in visual neural circuits and play an important role in target detection. However, it is unclear whether a feedback loop exists in the STMD visual pathway, or whether a feedback loop could significantly improve the detection performance of STMD neurons. In this paper, we propose a feedback neural network for small target motion detection against naturally cluttered backgrounds. To form a feedback loop, the model output is temporally delayed and relayed to a previous neural layer as a feedback signal. Extensive experiments showed significant improvement of the proposed feedback neural network over existing STMD-based models for small target motion detection.
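The architectural idea stated in the abstract, delaying the model's output and feeding it back to an earlier layer, can be sketched generically. Everything below is an illustrative assumption: the "feedforward stage" is just a rectified temporal difference standing in for the published STMD layers, and the parameter names are invented for the sketch.

```python
import numpy as np

def run_feedback_detector(frames, n_delay=2, gain=0.5):
    """Sketch of a temporally delayed feedback loop around a toy detector.

    Each frame is processed by a stand-in feedforward stage (a rectified
    temporal difference, which responds to change and ignores static
    clutter). The stage's own output from n_delay steps earlier is scaled
    by `gain` and added back into the input, closing the feedback loop so
    that responses at locations with sustained motion are reinforced.
    """
    buffer = []          # delay line holding past outputs
    prev_input = None
    outputs = []
    for frame in frames:
        # Delayed feedback: zero until enough history has accumulated.
        if len(buffer) >= n_delay:
            feedback = buffer[-n_delay]
        else:
            feedback = np.zeros_like(frame)
        drive = frame + gain * feedback
        if prev_input is None:
            out = np.zeros_like(frame)
        else:
            out = np.maximum(drive - prev_input, 0.0)  # rectified change
        prev_input = drive
        buffer.append(out)
        outputs.append(out)
    return outputs
```

A static scene yields zero output after the first frame, while a small moving dot keeps producing responses that the delayed feedback then amplifies; the real model's layers and filtering are of course far richer than this stand-in.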

    Hierarchical decision-making produces persistent differences in learning performance

    Human organizations are commonly characterized by a hierarchical chain of command that facilitates division of labor and integration of effort. Higher-level employees set the strategic frame that constrains lower-level employees, who carry out the detailed operations serving to implement the strategy. Typically, strategic and operational decisions are made by different individuals who act over different timescales and rely on different kinds of information. We hypothesize that when such decision processes are hierarchically distributed among different individuals, they produce highly heterogeneous and strongly path-dependent joint learning dynamics. To investigate this, we designed laboratory experiments with human dyads facing repeated joint tasks, in which one individual is assigned the role of making strategy decisions and the other operational ones. The experimental behavior generates a puzzling bimodal performance distribution: some pairs learn, while others fail to learn after a few periods. We also develop a computational model that mirrors the experimental settings and predicts the heterogeneity of performance by human dyads. Comparison of experimental and simulation data suggests that self-reinforcing dynamics arising from initial choices are sufficient to explain the performance heterogeneity observed experimentally.
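The claim that self-reinforcing dynamics from initial choices can produce a bimodal outcome distribution can be caricatured with a deliberately exploration-free reinforcement sketch. The payoff structure, the habit bonus, and the greedy choice rule below are all illustrative assumptions, not the authors' model; the point is only that lock-in on the first joint choice splits dyads into learners and non-learners.

```python
import random

def simulate_dyad(n_rounds=100, habit=0.1, seed=0):
    """Toy model of hierarchically distributed joint learning.

    One agent picks a 'strategy' (0 or 1), the other an 'operation'
    (0 or 1); only the joint choice (0, 0) pays off. Each agent adds the
    payoff it received to the propensity of its chosen action, plus a
    small habit bonus for whatever it chose. With purely greedy choice,
    the first round's accidental choices are self-reinforcing and fix
    the dyad's fate.
    """
    rng = random.Random(seed)
    q_strat = [0.0, 0.0]  # action propensities, strategy agent
    q_ops = [0.0, 0.0]    # action propensities, operations agent

    def greedy(q):
        best = max(q)
        return rng.choice([i for i, v in enumerate(q) if v == best])

    total = 0.0
    for _ in range(n_rounds):
        s, o = greedy(q_strat), greedy(q_ops)
        payoff = 1.0 if (s, o) == (0, 0) else 0.0
        q_strat[s] += payoff + habit  # reinforce payoff plus habit
        q_ops[o] += payoff + habit
        total += payoff
    return total / n_rounds  # average performance of the dyad

results = [simulate_dyad(seed=s) for s in range(100)]
```

Across seeds, each dyad ends at either perfect or zero performance depending solely on its first joint choice, a stark version of the path dependence the abstract describes; the experimental model presumably includes exploration and richer task structure.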

    Static sound timing alters sensitivity to low-level visual motion

    No full text

    Auditory modulation of visual apparent motion with short spatial and temporal intervals

    No full text

    Effects of contrast polarity in paracontrast masking

    No full text

    Short- and long-term forms of neural adaptation: An ERP investigation of dynamic motion aftereffects

    No full text
    Adaptation is essential for interacting with a dynamic and changing environment, and can be observed on different timescales. Previous studies on a motion paradigm called the dynamic motion aftereffect (dMAE) showed that neural adaptation can be established even on very short timescales. However, the neural mechanisms underlying such a rapid form of neural plasticity are still debated. In the present study, short- and long-term forms of neural plasticity were investigated using the dynamic motion aftereffect combined with EEG (electroencephalography). Participants were adapted to directional drifting gratings for either short (640 msec) or long (6.4 sec) durations. Both adaptation durations led to motion aftereffects on the perceived direction of a dynamic and directionally ambiguous test pattern, but the long adaptation produced a stronger dMAE. In line with the behavioral results, we found robust changes in the event-related potentials elicited by the dynamic test pattern within the 64–112 msec time range. These changes were mainly clustered over occipital and parieto-occipital scalp sites. Within this time range, the aftereffects induced by long adaptation were stronger than those induced by short adaptation. Moreover, the aftereffects induced by the two adaptation durations were in opposite directions. Overall, these EEG findings suggest that dMAEs reflect changes in cortical areas mediating low- and mid-level visual motion processing. They further provide evidence that short- and long-term forms of motion adaptation lead to distinct changes in neural activity, and hence support the view that adaptation is an active, time-dependent process involving different neural mechanisms.